
University responds to emergence of ChatGPT in education

“The University of Notre Dame’s campus is buzzing with the recent emergence of artificial intelligence, but its implementation has sparked concerns among students and faculty about the potential loss of jobs and ethical considerations.”

That introduction wasn’t written by The Observer. Prompted with brief instructions to provide a lede — in AP style — for this story, the artificial intelligence (AI) chatbot ChatGPT offered the preceding paragraph, delivering results in a matter of seconds.
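
The Observer prompted ChatGPT through its public chat interface. For readers curious what a comparable request looks like programmatically, the sketch below uses OpenAI’s published Python client; the model name, prompt wording and account setup are illustrative assumptions, not a record of how the lede above was produced.

    # Illustrative only: a lede request sent through OpenAI's Python client.
    # Requires an API key; the model choice here is an assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a one-paragraph lede, in AP style, for a story "
                       "about the University of Notre Dame responding to ChatGPT.",
        }],
    )
    print(response.choices[0].message.content)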

As the spring semester begins, conversations about the potential of AI in education are increasingly taking place in classrooms, faculty offices and dorm rooms. In a communication to faculty sent by the Office of Academic Standards (OAS), the chatbot is described as “a large language model, which generates text from prompts by predicting what sentences should follow prior sentences based on historical correlations of words.”
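
That description can be made concrete with a toy model. The short Python sketch below is not how ChatGPT actually works (the real system is a neural network with billions of parameters), but it shows, in miniature, the idea of generating text by predicting each next word from historical correlations of words.

    import random
    from collections import Counter, defaultdict

    # Toy training corpus; real models train on a vast volume of text.
    corpus = (
        "the university responds to chatbots . "
        "the university convenes a working group . "
        "the chatbot predicts the next word ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start, length=8):
        """Generate text by repeatedly sampling a likely next word."""
        words = [start]
        for _ in range(length):
            counts = following.get(words[-1])
            if not counts:
                break
            # Sample in proportion to historical frequency.
            choices, weights = zip(*counts.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))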

The University first took notice of ChatGPT in mid-December, when a student was caught using the site during a computer class final, according to OAS director and faculty honor code officer Ardea Russo.

The conversational software is often shocking in its speed and capability. Directed by The Observer, ChatGPT created detailed course syllabi with specific week-by-week readings, thematic poems and even songs about specific topics in the voice of particular songwriters.

“I sit behind a desk, with my pen in hand / I’m searching for the truth, in a world so grand,” the chatbot’s simulated Taylor Swift wrote — in less than a second — in the opening of a song about student journalism.

ChatGPT is just one piece of the ongoing artificial intelligence revolution that threatens to remake the ways in which much of the professional and academic worlds function. 

For instance, Google has developed an AI model that generates music based on any text it is given; DALL-E, a project of OpenAI, the same research lab that developed ChatGPT, can create AI art with strikingly specific results. OpenAI has entered into a contract with Microsoft’s search engine Bing, shaking up the future of online search. Already, most interviews conducted by The Observer are automatically transcribed using a program called Otter.ai.

University response

Russo said she has been the point person on the administration’s response to ChatGPT. Under the direction of vice president and associate provost for undergraduate education Fr. Dan Groody, she has convened a faculty working group and sent two communications out to faculty members on the matter. The working group has convened experts from “all over the University,” Russo said, including two experts who work specifically on generative AI. 

Russo described the mixed reaction to ChatGPT among both faculty and administration.

“I think there are concerns and excitement,” she said, adding that “the reason I was immediately concerned about it was because of the academic integrity side of it. I think it could be a really cool technology to use. I’m not opposed to it in general. The reason I wanted to start working on it right away was because I was concerned about students using it as a shortcut, rather than as a tool to their learning.”

In her communications to faculty, Russo has outlined two approaches to addressing ChatGPT: first, creating assignments that use the chatbot as part of the assignment itself, and second, designing “assignments that are ChatGPT-proof.” In guidance to faculty, Russo wrote that “the more specific your assignments are, the less ChatGPT can do.”

Russo emphasized the variability of ChatGPT response quality. The faculty guidance says that “even when the responses given are technically correct, the quality of the content varies greatly. Sometimes it does extremely good work and other times it does not.”

How ChatGPT works

ChatGPT uses a generative AI model, Nitesh Chawla said, referring to algorithms that can be used to create new content, including audio, code, images, text, simulations and videos.

Chawla is the director of the Lucy Family Institute for Data and Society and a professor of computer science and engineering at Notre Dame. He explained that, like existing search engines, ChatGPT can be used to find answers to user questions. But its ability to produce a unique response in whatever form the user asks for makes ChatGPT, in Chawla’s words, “a search engine on steroids.”

ChatGPT is an “engineering marvel,” Chawla said, but it cannot actively seek out new information in the way that humans do.

“If you are given a situation that you have never ever encountered before in your life, you would go, ‘oh, I need to learn it,’” Chawla said. “Now, that is what ChatGPT is not doing. ChatGPT is basically saying, ‘you have taught me everything that I could be taught and I will answer based on what I have been taught.’”

Chawla described how the AI’s learning is shaped by human training.

“ChatGPT is, in the simplistic way, a massive language model that has been trained with an extremely large volume of text or documents, which has also been trained with some human feedback into it, which has allowed it to learn or correct itself,” Chawla said.
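
Chawla’s two-stage picture, learning from a large volume of text and then correcting with human feedback, can likewise be sketched in miniature. The toy Python below is a loose illustration of that idea, not OpenAI’s actual training procedure: it scores candidate answers by how often they appear in “training” text, then adjusts those scores with simulated human ratings.

    from collections import Counter

    # Stage 1: "pretraining" -- absorb candidate answers and their frequency.
    training_text = [
        "paris is the capital of france",
        "paris is the capital of france",
        "lyon is the capital of france",  # an error the model absorbs anyway
    ]
    scores = Counter(training_text)

    # Stage 2: human feedback -- raters nudge answer scores up or down.
    feedback = {
        "paris is the capital of france": +2,
        "lyon is the capital of france": -2,
    }
    for answer, rating in feedback.items():
        scores[answer] += rating

    # The corrected model now answers with its highest-scoring candidate.
    print(max(scores, key=scores.get))  # -> "paris is the capital of france"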

Russo emphasized to The Observer that ChatGPT can, on occasion, very confidently provide incorrect information.

“The overall accuracy of ChatGPT is something we should pay attention to. How reliable is it really? I guess we don’t know that yet,” she said.

Chawla also expressed concern about the accuracy of ChatGPT and similar language models.

“ChatGPT will string words together based on what it has seen,” he said. “Now, what if those answers or their responses are not grounded?”

“We have to really be very careful and say, ‘ChatGPT is a valid tool for functions A, B and C. Do not use it beyond that,’” Chawla said. “We haven’t put those guardrails up yet.”
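
A guardrail of the kind Chawla describes can be pictured as a policy layer between the user and the model. The sketch below is purely hypothetical (the approved functions and the naive classifier are assumptions made for illustration), but it shows what restricting a tool to certain sanctioned functions might look like in code.

    # Purely hypothetical guardrail: approve requests only for sanctioned uses.
    # The categories and the classify() rule are assumptions for this sketch.
    APPROVED_USES = {"brainstorming", "summarizing"}

    def classify(prompt: str) -> str:
        """Naive stand-in for a real classifier of what a prompt asks for."""
        lowered = prompt.lower()
        if "summarize" in lowered:
            return "summarizing"
        if "brainstorm" in lowered:
            return "brainstorming"
        return "other"

    def guarded_request(prompt: str) -> str:
        """Forward approved requests; refuse everything else."""
        use = classify(prompt)
        if use not in APPROVED_USES:
            return "Refused: this use falls outside the approved functions."
        return f"(forwarding to the model as an approved '{use}' request)"

    print(guarded_request("Summarize this reading for me"))
    print(guarded_request("Write my final essay for me"))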

Generative AI in the classroom

The administration’s largely open-ended approach has allowed faculty to take disparate approaches to ChatGPT and other AI tools. While some have outright banned usage of the site in their classes, Andrew Gould, a political science professor, wrote new sections into his syllabi about AI tools, allowing students to consult the program.

“You may consult artificial intelligence (AI) technology such as ChatGPT. You must still convey the truth about your sources and the truth about your own contributions to the essay,” Gould’s “European Politics” syllabus specifies.

“However, AI technologies have not been trained on material about recent events. Moreover, AI technologies can produce output that is incorrect. If you quote or paraphrase from AI output in your written work, you must cite the AI source.”

AI technologies “can respond to queries with useful summaries and syntheses of conventional wisdom,” Gould told The Observer. 

“I found that asking [short response] questions that are similar to the kinds of questions I asked my students, the very good ones that I’ve gotten from ChatGPT seem like B+ or B answers to me, but very good responses,” Gould said.

When asked about the possibility of a student attempting to pass off a ChatGPT essay as their own work, Gould said he has “zero” concern.

“It’s very difficult to, in an unacknowledged way, use ChatGPT, add some course-specific material and not reveal that ChatGPT played a role in formulation of the argument or the evidence or the overall structure,” he said.

Dan Lindley, another political science professor, disagrees, forbidding use of generative AI in his classes. He said the development of AI in education is taking academia “by storm,” and called the recent developments “a frightful prospect” and “bad for education.”

“I think it’s a potential threat to the learning process. Anytime students can take the easy way out, it’s not as good as the hard way in,” Lindley said. “Learning how to write is not easy, and learning how to write is associated with clarifying your own thoughts and trying to simplify things that are difficult. And ChatGPT takes that all away.”

Gould said that in his experimentation with the technology, there are gaps in the site’s current ability. 

“Asking questions that really take some expertise, it seems to fall flat, so I would not be impressed if the student said in an email, ‘here’s this comment’ [in response to a course question]. I would think the student didn’t really get it,” Gould said.

For instance, Gould said that if prompted to provide a realist theory of how the European Union formed, ChatGPT provides a decent, if insufficient, argument.

“You don’t get to a systematic answer,” Gould said. “It’ll be true things, but there’s not some overarching reasoning to it.”

He’s nonetheless impressed with the site’s ability to work so quickly.

“But getting a B+ in a half a second or less. That’s pretty impressive. Like you could say, ‘oh gee,’ but to me, it seems pretty powerful. And then areas outside of my expertise, the answers seem great,” he said.

Challenging status quo education

Susan Blum, an anthropologist who most recently wrote the book “Ungrading: Why Rating Students Undermines Learning (and What to Do Instead),” said the emergence of AI prompts larger questions about the education system itself.

“We talk about academic integrity, but there’s really a deeper issue that we almost never talk about, which is, what is the purpose of education? Why are the students there? What do they actually want to get out of what they’re learning?” she said.

Blum, who has also written a book specifically about plagiarism and college culture, approached the issue of AI in the classroom with a long view of the technologies that have reshaped educational environments throughout her lifetime. For example, she remembers the advent of calculators, which some worried would have a detrimental effect on students’ abilities in math.

“‘You had to do the math yourself by hand, because students have to learn how to do math.’ Well, maybe they do, maybe they don’t, but I use a calculator all the time,” Blum said. “People would think now that it’s a very silly argument that we should forbid calculators.”

“I see ChatGPT as another development in the continuous invention of new technologies that will have a role to play in our lives. I see this as an educational problem, not an ethical problem,” she added.

Gould similarly believes that generative AI programs “have the potential to transform radically the nature of work throughout the economy throughout the world,” and for professors to implement a “blanket policy not to consult it or use it is a mistake.”

“Do we sit students in a room and have them write by hand so that they can’t consult anything? Some people are proposing that,” Blum said. “Or maybe we don’t ask boring questions that students don’t want to write. Maybe we have to really rethink our teaching.”

Blum said AI technologies will be practical for students hoping to achieve high test scores, making them so appealing that a ban might not be effective.

“Until we have more interesting stuff, this is going to be something that students turn to, and I think forbidding it won’t work,” Blum said.

The University is conscious of those ideas. Russo said that courses should have a deeper aim to encourage learning among students, above the simple pursuit of grades.

“I think that the more motivated they are to learn the material because it’s interesting and relevant, the less they’ll want to go online and just turn in something. I feel like our students should want to be better than a machine,” Russo said. 

“And so I’m hoping that that will be enough to deter students. You know, when you’re at a dinner with friends and a conversation topic comes up, you’re gonna want to chime in on the conversation, not be like, ‘well, let me see, let me put this into ChatGPT and see what ChatGPT thinks about it,’” she added.

Implications for the future

Russo, Chawla, Blum, Gould and Lindley all agreed that generative AI is still in its infancy and will continue to grow and adapt.

“I think there’s a general awareness that we’re in a very early period of ChatGPT and I understand there’s a new version coming out, which will be even better. I know that the current version doesn’t know anything that happened past 2021, but the new version will be updated. So I think there’s a general awareness that we want to kind of wait and see where it goes,” Russo said.

Lindley argued that AI’s implications range far beyond chatbots, discussing potential dangers of AI that he has encountered in his fields of expertise.

“If you were studying weapons, you’d be kind of attuned to what might happen with AI, because there’s going to be autonomous weapons out there. And what are their rules going to be? How are they going to be programmed to kill?” Lindley said. “Yes, they’re going to kill and then what’s gonna happen?”

Gould said ChatGPT will continue to improve and shed the current flaws in the content it produces. “That’s why I think we should engage, not prohibit,” he said.

He discussed the broader societal impacts of such technology, which he predicts will rapidly take shape in the years to come.

“I think we’re just at the beginning of figuring out what the impact is. I have shared with seminar students my concern that employers hire us for our skills and abilities to do things for them. They do not hire us for our emotions,” Gould said.

“So I think we, and people entering the job market, have to ask ourselves, ‘what can I do that AI cannot do?,’ or ‘what can I do with AI that AI cannot do by itself?’ That, to me, seems like a pretty serious question. And so yeah, there’s the danger that AI can replace the kind of general skills and intellectual work that we train our students for.”

Contact Liam Price at lprice3@nd.edu and Isa Sheikh at isheikh@nd.edu